What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory

Authors

Rahul Parhi, Robert D. Nowak

Abstract

We develop a variational framework to understand the properties of functions learned by fitting deep neural networks with rectified linear unit (ReLU) activations to data. We propose a new function space, which is reminiscent of classical bounded variation-type spaces, that captures the compositional structure associated with deep neural networks. We derive a representer theorem showing that deep ReLU networks are solutions to regularized data fitting problems over functions from this space. The function space consists of compositions of functions from the Banach spaces of second-order bounded variation in the Radon domain. These are Banach spaces with sparsity-promoting norms, giving insight into the role of sparsity in deep neural networks. The neural network solutions have skip connections and rank bounded weight matrices, providing new theoretical support for these common architectural choices. The variational problem we study can be recast as a finite-dimensional neural network training problem with regularization schemes related to the notions of weight decay and path-norm regularization. Finally, our analysis builds on techniques from variational spline theory, providing new connections between deep neural networks and splines.
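To make the recast training problem concrete, the following is a minimal sketch of the shallow (single-hidden-layer) case; the notation (λ, width K, ReLU ρ, loss ℓ) is chosen here for exposition rather than taken verbatim from the paper, whose deep result composes such layers. Even the shallow picture shows how the function-space penalty turns into path-norm regularization and weight decay.

```latex
% Sketch (single hidden layer, for illustration): variational data-fitting
% problem over the second-order bounded-variation space in the Radon domain:
\min_{f} \; \sum_{i=1}^{N} \ell\bigl(f(x_i), y_i\bigr)
  \;+\; \lambda \, \|f\|_{\mathrm{BV}^{2}(\mathbb{R}^{d})}

% A solution is a ReLU network with an affine (skip) term, so the problem
% can be recast in finite dimensions with a path-norm-type penalty:
\min_{\{v_k, w_k, b_k\},\, u,\, s} \;
  \sum_{i=1}^{N} \ell\Bigl( \sum_{k=1}^{K} v_k \,\rho\bigl(w_k^{\top} x_i - b_k\bigr)
  + u^{\top} x_i + s, \; y_i \Bigr)
  \;+\; \lambda \sum_{k=1}^{K} |v_k| \, \|w_k\|_{2}

% Since the ReLU is positively homogeneous, the rescaling
% (v_k, w_k) \mapsto (\alpha v_k, w_k / \alpha) leaves the network unchanged, and
|v_k| \, \|w_k\|_{2}
  \;=\; \min_{\alpha > 0} \; \tfrac{1}{2}\bigl(\alpha^{2} |v_k|^{2} + \alpha^{-2} \|w_k\|_{2}^{2}\bigr),
% so the path-norm penalty agrees with weight decay at balanced scalings.
```

The ℓ¹-type penalty on the outer weights v_k is the sparsity-promoting mechanism the abstract refers to: many v_k are driven exactly to zero, while the affine term u^⊤x + s (in the null space of the seminorm) corresponds to the skip connection.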


Related Articles

What Do Deep CNNs Learn About Objects?

Deep convolutional neural networks learn extremely powerful image representations, yet most of that power is hidden in the millions of deep-layer parameters. What exactly do these parameters represent? Recent work has started to analyse CNN representations, finding that, e.g., they are invariant to some 2D transformations (Fischer et al., 2014), but are confused by particular types of image nois...


Do Deep Neural Networks Suffer from Crowding?

Crowding is a visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. In this work, we study the effect of crowding in artificial Deep Neural Networks for object recognition. We analyze both standard deep convolutional neural networks (DCNNs) as well as a new version of DCNN...


Variational Dropout Sparsifies Deep Neural Networks

We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse sol...
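As a rough sketch of the idea, and not the authors' implementation: a linear layer with an individual, learnable dropout rate per weight can be written with an additive-noise parameterization and the local reparameterization trick, one way to reduce the variance of the gradient estimator. The log-α pruning threshold of 3 below is an assumed convention from the sparse variational dropout literature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalDropoutLinear(nn.Module):
    """Linear layer with an individual, learnable dropout rate per weight (sketch)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.theta = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        # Additive-noise parameterization: weight = theta + sigma * eps,
        # so the per-weight dropout rate is alpha = sigma^2 / theta^2.
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))

    def log_alpha(self) -> torch.Tensor:
        return self.log_sigma2 - torch.log(self.theta.pow(2) + 1e-8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Local reparameterization: sample pre-activations rather than
            # weights, which lowers the variance of the gradient estimator.
            mean = F.linear(x, self.theta)
            var = F.linear(x.pow(2), self.log_sigma2.exp()) + 1e-8
            return mean + var.sqrt() * torch.randn_like(mean)
        # At test time, prune weights whose inferred dropout rate is large.
        mask = (self.log_alpha() < 3.0).to(self.theta.dtype)
        return F.linear(x, self.theta * mask)
```

During training one would add the KL regularizer in log α to the data loss; weights whose inferred dropout rate grows large contribute nothing at test time, which is where the extreme sparsity mentioned above comes from.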


What Do We Learn from Experimental Algorithmics?

Experimental Algorithmics is concerned with the design, implementation, tuning, debugging and performance analysis of computer programs for solving algorithmic problems. It provides methodologies and tools for designing, developing and experimentally analyzing efficient algorithmic codes and aims at integrating and reinforcing traditional theoretical approaches for the design and analysis of alg...


What do we learn from rating metaphors?

What makes some metaphors easier to understand than others? Theoretical accounts of metaphor processing appeal to dimensions like conventionality and aptness to explain variability in metaphor comprehensibility. In a typical experiment, one group of naive participants rates a set of metaphoric sentences along these dimensions, while another is timed reading the same sentences. Then, the ratings...



Journal

Journal: SIAM Journal on Mathematics of Data Science

Year: 2022

ISSN: 2577-0187

DOI: https://doi.org/10.1137/21m1418642